===============================================================================
Emulex NIC Device Driver for Windows
===============================================================================
Device Driver Version: 12.0.1195.0
Supported On: Lenovo System x Rack and Flex

Problems Fixed:
- None

Incremental Interoperability:
- Purley System Supported
- Windows 2016 Support

Known Issues:
- None

===============================================================================
Device Driver Version: 11.4.1185.0
Supported On: Lenovo System x Rack and Flex

Problems Fixed:
- [Lenovo 102888] Mezz card CN4052S name shown inconsistently between the LXCC
  web UI and the OS (205168)

Incremental Interoperability:
- None

Known Issues:
- None

===============================================================================
Device Driver Version: 11.2.1153.13-20
Supported On: Lenovo System x Rack and Flex

Problems Fixed:
- WS2016: Request to add PD debug information in sestats and OneCapture
- Update VLAN description in the driver property page
- WS2016: Hitting a BSOD after enabling VMMQ on a SET vSwitch and powering on
  the VM
- occfg.exe/becfg.exe for Nano Server

Incremental Interoperability:
- Windows 2016 Support

Known Issues:
- Beginning with software release 11.2, Emulex LightPulse adapters and OneConnect
  adapters have independent software kits. The "Broadcom Software Kit Migration
  Guide" document provides special instructions and considerations for using the
  driver kits for LightPulse and OneConnect adapters.
  https://docs.broadcom.com/docs/12378907

===============================================================================
Device Driver Version: 11.1.145.24
Supported On: Lenovo System x Rack and Flex

Problems Fixed:
- None

Incremental Interoperability:
- Flex System CN4052S 2-port 10Gb Virtual Fabric Adapter Advanced
- Flex System CN4054S 4-port 10Gb Virtual Fabric Adapter Advanced
- Emulex VFA5.2 ML2 2x10 GbE SFP+ Adapter and FCoE/iSCSI SW
- Sign Windows 2008 R2 and 2012 drivers with both SHA1 and SHA2

Known Issues:
- RoCE support is limited to named applications. Contact your local OEM Sales
  Support for more information.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance. This is a permanent limitation.

===============================================================================
Device Driver Version: 11.0.243.18
Supported On: Lenovo System x Rack and Flex

Problems Fixed:
- Driver causes the OS to crash after resume through WoL (181093)
- Help string missing for the "RoCE Mode" option in the driver's property page
  (183699)

Incremental Interoperability:
- Implement Packet Direct feature (173938)
- Statistics for Windows RDMA (RoCEv2 vs. RoCEv1, Congestion Mgmt...) (176587)

Known Issues:
- RoCE support is limited to named applications. Contact your local OEM Sales
  Support for more information.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance.
  This is a permanent limitation.

===============================================================================
Device Driver Version: 10.6.236.0
Supported On: Lenovo System x Rack and Flex

Problems Fixed:
- None

Incremental Interoperability:
- RoCE v2 with XE-104 P2-based adapters
- Flex System CN4052 R 2-port 10Gb Virtual Fabric Adapter
- Lenovo 00AG572 VFA5 2x10GbE SFP+ PCIe Adapter
- Lenovo 00AG582 VFA5 2x10GbE SFP+ PCIe FCoE/iSCSI Adapter
- Lenovo 00AG562 VFA5 ML2 2x10GbE SFP+ Adapter

Known Issues:
- RoCE support is limited to named applications. Contact your local OEM Sales
  Support for more information.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance. This is a permanent limitation.

===============================================================================
Device Driver Version: 10.4.255.23
Supported On: System x Rack

Problems Fixed:
- OCe14000 with LRO enabled causes the OS to reboot
- Observing "IOCTL to firmware failed" in the Diagnostics tab after installing
  the driver to the Virtual Function

Incremental Interoperability:
- Lenovo-branded versions of formerly IBM-branded products

Known Issues:
- RoCE support is limited to named applications. Contact your local OEM Sales
  Support for more information.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance. This is a permanent limitation.

===============================================================================
Device Driver Version: 10.2.531.0
Supported On: Flex

Problems Fixed:
- BSOD when booting to Windows 2012 R2 with Hyper-V with 4x CN4054 in the system
- Driver reports too many VMQ resources

Incremental Interoperability:
- Lenovo-branded versions of formerly IBM-branded products

Known Issues:
- RoCE support is limited to named applications. Contact your local OEM Sales
  Support for more information.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance. This is a permanent limitation.

===============================================================================
Device Driver Version: 10.2.413.1
Supported On: System x, BladeCenter, and Flex

Problems Fixed:
- Packet loss observed during affinity changes. During the change, a 'pause'
  occurs, packets are dropped, bandwidth utilization drops because of the stall,
  and overall performance is inconsistent.
- When using multi-channel in a vSwitch environment, map one PF to the
  hypervisor vSwitch and then use the other PFs/vNICs for the live migration
  network, FCoE/iSCSI, the management network, and the like in order to get the
  best performance.

Incremental Interoperability:
- Grantley-based ITE

Known Issues:
- RoCE support is limited to named applications. Contact your local IBM Sales
  Support for more information.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance.
  This is a permanent limitation.

===============================================================================
Device Driver Version: 10.2.370.18
Supported On: System x and BladeCenter

Problems Fixed:
- Network connectivity is lost for VMQ interfaces during a VMQ CPU affinity
  stress test

Incremental Interoperability:
- None

Known Issues:
- RoCE support is limited to named applications. Contact your local IBM Sales
  Support for more information.
- Packet loss observed during affinity changes. During the change, a 'pause'
  occurs, packets are dropped, bandwidth utilization drops because of the stall,
  and overall performance is inconsistent.
- When using multi-channel in a vSwitch environment, map one PF to the
  hypervisor vSwitch and then use the other PFs/vNICs for the live migration
  network, FCoE/iSCSI, the management network, and the like in order to get the
  best performance.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance.

===============================================================================
Device Driver Version: 10.2.348.2
Supported On: IBM System x

Problems Fixed:
- BSOD after running I/O using NVGRE offload for 16 VMs; also affects VMQ and
  SR-IOV
- Option to enable/disable RSS not visible in the NIC advanced properties
- Network connectivity is lost for VMQ interfaces
- Performance and fairness for 16 VMs is uneven on Windows 2012 R2

Incremental Interoperability:
- Grantley Rack Servers

Known Issues:
- RoCE support is limited to named applications. Contact your local IBM Sales
  Support for more information.
- Packet loss observed during affinity changes. During the change, a 'pause'
  occurs, packets are dropped, bandwidth utilization drops because of the stall,
  and overall performance is inconsistent.
- When using multi-channel in a vSwitch environment, map one PF to the
  hypervisor vSwitch and then use the other PFs/vNICs for the live migration
  network, FCoE/iSCSI, the management network, and the like in order to get the
  best performance.
- There is no benefit from using NVGRE on more than one vNIC attached to the
  same physical port. As such, only one vSwitch should be attached to a physical
  port. Attaching more than one vSwitch to a physical port could cause a dramatic
  decrease in performance.

===============================================================================
Device Driver Version: 10.2.261.11
Supported On: IBM System x, BladeCenter, and Flex

Problems Fixed:
- When MAC learning is enabled, the driver should not drop internally switched
  packets
- Incorrect port numbering in the NIC multichannel tab
- No link down on UMC 0 bandwidth min and max
- BE3: very low transmit throughput on an SR-IOV guest OS (netperf less than
  100 Mbps although the link is 10Gb)
- BE3: Windows 2012 R2 Hyper-V server crashed during the creation of a virtual
  switch using the LACP teamed adapter
- NIC Teaming (failover/failback) fails on HS23 LOM

Incremental Interoperability:
- ARI (PCI-SIG specification)
- RDMA over CEE (RoCE)

Known Issues:
- RoCE support is limited to named applications. Contact your local IBM Sales
  Support for more information.
- When using VMQ with Windows Server 2012 or 2012 R2, the user may experience VM
  connectivity loss, packet drops, system hangs, an inability to shut down VMs,
  and possible system crashes on shutdown (a query sketch follows this section).
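
Note: The VMQ issue above can be investigated by checking whether VMQ is active
on the affected adapter and, if needed, temporarily disabling it while
troubleshooting. The following is a minimal, illustrative sketch only (not a
vendor-provided tool), assuming Python 3 on the Windows host and the inbox
NetAdapter PowerShell cmdlets Get-NetAdapterVmq, Disable-NetAdapterVmq, and
Enable-NetAdapterVmq; "Ethernet 2" is a placeholder adapter name.

    # Illustrative sketch: query VMQ state for a named adapter and optionally
    # disable it while troubleshooting the VMQ known issue described above.
    # Assumes Python 3 and the inbox NetAdapter PowerShell cmdlets;
    # "Ethernet 2" is a placeholder adapter name.
    import subprocess

    ADAPTER = "Ethernet 2"  # placeholder; substitute the affected adapter's name

    def run_ps(command: str) -> str:
        """Run a PowerShell command and return its stdout."""
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # Show whether VMQ is currently enabled on the adapter.
    print(run_ps(f"Get-NetAdapterVmq -Name '{ADAPTER}' | Format-List Name, Enabled"))

    # Uncomment to disable VMQ on the adapter while troubleshooting.
    # run_ps(f"Disable-NetAdapterVmq -Name '{ADAPTER}'")

Disabling VMQ is shown only as a troubleshooting step; re-enable it with
Enable-NetAdapterVmq once the investigation is complete.
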
===============================================================================
Device Driver Version: 10.0.718.26
Supported On: IBM System x

Problems Fixed:
- VMQ driver option incorrectly available for 1Gb ports of BE3
- BE2 and BE3: Hitting NIC initialization failure after updating the driver
- Update custom property page display for BE3R
- Windows NIC driver always reports initial link status as UP

Incremental Interoperability:
- Emulex Dual Port 10Gb SFP+ Ethernet Adapter Card (XE-102/XE-104 based)

Known Issues:
- None

===============================================================================
Device Driver Version: 4.6.203.1
Supported On: IBM System x, BladeCenter, and Flex

Problems Fixed:
- Hyper-V VMs with tagged VLANs did not get an IP address
- NIC Function Properties > Status for PCI Express Link Speed says "Invalid Link
  Speed detected"
- iSCSI ports of the CNA card are not displayed in the OCM GUI (vCenter plug-in)
  on ESXi 4.1u3

Incremental Interoperability:
- None

Known Issues:
- None

===============================================================================
Device Driver Version: 4.6.142.8
Supported On: IBM System x and BladeCenter

Problems Fixed:
- None

Incremental Interoperability:
- PCI IDs added for XE4310R controller applications
- Hyper-V with Switch Independent Mode enabled

Known Issues:
- No known issues

===============================================================================
Device Driver Version: 4.4.176.2
Supported On: IBM Flex

===============================================================================
Device Driver Version: 4.2.390.6
Supported On: IBM System x and BladeCenter

Problems Fixed:
- Cleaned up the .inf file
- TOE offload fails when the same neighbor is added more than once

Incremental Interoperability:
- Windows 2012

Known Issues:
- Switch Independent Mode not supported in a Hyper-V environment
- Default values vary between the inbox and out-of-box drivers (a query sketch
  follows this section). For example:
    Configuration section
      - Wake On LAN: inbox default is "Disabled"; out-of-box default is "Enabled"
    Performance section
      - Maximum Number of RSS Queues: inbox default is "4"; out-of-box default
        is "8"
      - Virtual Machine Queues: inbox default is "Disabled"; out-of-box default
        is "Enabled"
  After any installation, you can select the button that resets all driver
  settings to their defaults.
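
Note: The advanced-property values listed above (Wake On LAN, Maximum Number of
RSS Queues, Virtual Machine Queues) can be checked after a driver installation
to see which defaults are in effect. The following is a minimal, illustrative
sketch only (not a vendor-provided tool), assuming Python 3 on the Windows host
and the inbox Get-NetAdapterAdvancedProperty cmdlet; display names may vary by
driver version, and "Ethernet 2" is a placeholder adapter name.

    # Illustrative sketch: print the current values of the advanced properties
    # called out above so inbox vs. out-of-box defaults can be compared.
    # Assumes Python 3 and the inbox Get-NetAdapterAdvancedProperty cmdlet;
    # display names may vary by driver version. "Ethernet 2" is a placeholder.
    import subprocess

    ADAPTER = "Ethernet 2"  # placeholder; substitute the adapter's name
    PROPERTIES = [
        "Wake On LAN",
        "Maximum Number of RSS Queues",
        "Virtual Machine Queues",
    ]

    def get_property(adapter: str, display_name: str) -> str:
        """Return the DisplayValue of one advanced property, or '<not found>'."""
        command = (
            f"(Get-NetAdapterAdvancedProperty -Name '{adapter}' "
            f"-DisplayName '{display_name}').DisplayValue"
        )
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True,
        )
        value = result.stdout.strip()
        return value if value else "<not found>"

    for name in PROPERTIES:
        print(f"{name}: {get_property(ADAPTER, name)}")
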
===============================================================================
Device Driver Version: 4.1.370.0
Supported On: IBM System x, BladeCenter, and Flex

Incremental Interoperability:
- Initial release for IBM Flex

Problems Fixed:
- HS23: Sleep stress test failed with bugcheck C4 during driver verify
- TOE with RSC enabled causes the network to go down
- Augmented WoL capabilities
- Driver parameter query failures
- BSOD 9F during shutdown if the device stops responding
- RSS queues may change port during MPRestart and MPPause

===============================================================================
Device Driver Version: 4.1.334.25
Supported On: IBM System x and BladeCenter

Problems Fixed:
- Emulex: Large Receive Offload support needed in the Windows NIC driver
- Win WoL: Need to report WoL based on ARM-reported capabilities
- Update the driver strings for existing cards
- RSS code may access PerMessage beyond the end of the array
- 16-queue RSS for BE3 on Windows 2008 R2
- Fix unverified mode of occfg to allow customer workarounds
- 16-queue RSS lockup if SR-IOV enabled in the registry
- Standard property page for the 1-port Windows Azure card
- Timeout RQ flush to avoid bugcheck during shutdown
- Change receive buffer alignment to use the Windows 2008 R2 TCP/IP fast path
- Emulex: BSOD when replacing a BE3 card with a BE2 card after installation of
  the new kit driver and firmware on Win2k3 SP2 R2 x64 (vNIC-enabled IBM machine)
- be2nd62.inf failed ChkInf
- VMQ registry keys displayed in Windows 2008
- be_function_prepare_nonembedded_ioctl overwrites the version field
- autoi reboot fails on Win2K8 x64 when verifier is enabled
- PCI ID: Updates for Endeavor 2 for IBM - PCI IDs Rev 1.22
- Emulex NIC driver causes BSOD during shutdown whenever disabling the firewall
  or joining a domain network
- Sleep stress test failed with bugcheck C4 during driver verify
- Disable TOE by default for IBM
- BSOD 9F during shutdown if the device stops responding

===============================================================================
Device Driver Version: 2.103.389.0
Supported On: IBM System x and BladeCenter

- Initial Release
===============================================================================